30 research outputs found

    Classification using semantic feature and machine learning: Land-use case application

    Land cover classification has attracted considerable recent work, especially for deforestation, urban area monitoring, and agricultural land use. Traditional classification approaches have limited accuracy, especially for non-homogeneous land cover, so machine learning may improve classification accuracy. This paper addresses land-use scene recognition on very high-resolution remote sensing imagery. We propose a new framework based on semantic features, handcrafted features, and the fused decisions of machine learning classifiers. The method starts with semantic feature extraction using a convolutional neural network. Handcrafted features are also extracted based on color and multi-resolution characteristics. The classification stage is then carried out by three machine learning algorithms, and the final classification result is produced by a majority vote. The idea is to take advantage of both semantic and handcrafted features; a second aim is to use decision fusion to enhance the classification result. Experimental results show that the proposed method provides good accuracy and a reliable tool for land-use image identification.
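
    A minimal sketch of the decision-fusion stage described above, assuming scikit-learn and stand-in feature matrices: three heterogeneous classifiers are trained on concatenated deep and handcrafted features and combined by hard majority voting. The feature data and classifier choices here are illustrative, not the paper's.

```python
# Majority-vote fusion over concatenated deep + handcrafted features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X_deep = rng.normal(size=(200, 128))   # stand-in for CNN (semantic) features
X_hand = rng.normal(size=(200, 32))    # stand-in for color/multi-resolution descriptors
X = np.hstack([X_deep, X_hand])
y = rng.integers(0, 4, size=200)       # four hypothetical land-use classes

# Hard voting: each learner casts one vote per sample, majority wins.
vote = VotingClassifier(
    estimators=[
        ("svm", SVC()),
        ("rf", RandomForestClassifier(n_estimators=100)),
        ("knn", KNeighborsClassifier(n_neighbors=5)),
    ],
    voting="hard",
)
vote.fit(X, y)
print(vote.predict(X[:5]))
```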

    A new feature extraction approach based on nonlinear source separation

    A new feature extraction approach is proposed in this paper to improve classification performance on remotely sensed data. The proposed method is based on a primary source subset (PSS) obtained by a nonlinear transform that provides a lower-dimensional space for land pattern recognition. First, the underlying sources are approximated using multilayer neural networks; Bayesian inference then updates knowledge of the unknown sources and of the model parameters with the data's information. Then, a source dimension minimization technique is adopted to provide a more efficient land cover description. A support vector machine (SVM) scheme is developed using the extracted features. The experimental results on real multispectral imagery demonstrate that the proposed approach ensures efficient feature extraction using several descriptors for texture identification and multiscale analysis. In a pixel-based approach, the reduced PSS space improved the overall classification accuracy by 13%, reaching 82%. Using texture and multi-resolution descriptors, the overall accuracy is 75.87% for the original observations, while using the reduced source space it reaches 81.67% when jointly using the wavelet and Gabor transforms and 86.67% when using the Gabor transform alone. Thus, the source space enhances the feature extraction process and allows more land-use discrimination than the multispectral observations.
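
    The paper's Bayesian multilayer source-separation model is not reproduced here; as a rough sketch of the pipeline's shape only, kernel PCA stands in as a generic nonlinear dimensionality reduction feeding an SVM, on placeholder data.

```python
# Nonlinear reduction to a small "source" space, then SVM classification.
import numpy as np
from sklearn.decomposition import KernelPCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)
X = rng.normal(size=(300, 6))          # placeholder multispectral pixels (6 bands)
y = rng.integers(0, 3, size=300)       # placeholder land-cover labels

clf = make_pipeline(
    StandardScaler(),
    KernelPCA(n_components=3, kernel="rbf"),  # reduced space (stand-in for PSS)
    SVC(kernel="rbf"),
)
clf.fit(X, y)
print(clf.score(X, y))
```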

    Deep Learning and Uniform LBP Histograms for Position Recognition of Elderly People with Privacy Preservation

    For the elderly population, falls are a vital health problem, especially in the current context of home care for COVID-19 patients. Given the saturation of health structures, patients are quarantined in order to prevent the spread of the disease. Therefore, it is highly desirable to have a dedicated monitoring system to adequately improve their independent living and significantly reduce assistance costs. A fall event is considered a specific and brutal change of pose; thus, human poses should first be identified in order to detect abnormal events. Prompted by the great results achieved by deep neural networks, we propose a new architecture for image classification based on local binary pattern (LBP) histograms for feature extraction. These features are then saved, instead of saving the whole image, for the series of identified poses; we thereby aim to preserve privacy, which is highly recommended in health informatics. The novelty of this study lies in recognizing individuals' positions in video images while avoiding the exorbitant computational cost of convolutional neural networks (CNNs) and minimizing the number of inputs needed to learn a recognition model. The numerical results of applying our approach are very promising compared with those of more complex architectures such as deep CNNs.
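
    A small sketch of the privacy-preserving feature step, assuming scikit-image: only a uniform-LBP histogram per frame is kept, so the raw image never needs to be stored. The frame here is synthetic.

```python
# Keep a compact uniform-LBP histogram per frame instead of the image itself.
import numpy as np
from skimage.feature import local_binary_pattern

def lbp_histogram(gray_image, points=8, radius=1):
    """Normalized uniform-LBP histogram of a grayscale frame."""
    lbp = local_binary_pattern(gray_image, points, radius, method="uniform")
    n_bins = points + 2  # "uniform" LBP yields points + 2 distinct codes
    hist, _ = np.histogram(lbp, bins=n_bins, range=(0, n_bins), density=True)
    return hist  # this descriptor is stored; the frame can be discarded

frame = np.random.randint(0, 256, size=(120, 160), dtype=np.uint8)
print(lbp_histogram(frame))
```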

    A Deep Learning-Based Framework for Feature Extraction and Classification of Intrusion Detection in Networks

    An intrusion detection system (IDS) is extremely important for preventing attacks on a network, violations of network policy, and unauthorized access. The effectiveness of an IDS is highly dependent on the data preprocessing techniques and classification models used to enhance accuracy and reduce model training and testing time. Researchers have developed several machine learning and deep learning-based algorithms for anomaly identification; nonetheless, accurate anomaly detection with low test and train times remains a challenge. Using a hybrid feature selection approach and a deep neural network (DNN) based classifier, the authors of this research propose an enhanced intrusion detection system. To construct a reduced, optimal subset of features for classification, a hybrid feature selection model consisting of three methods, namely chi-square, ANOVA, and principal component analysis (PCA), is applied. The proposed model is trained and evaluated on the NSL-KDD dataset and achieved the following results: a reduction of input data by 40%, an average accuracy of 99.73%, a precision score of 99.75%, an F1 score of 99.72%, and average training and testing times of 138 and 2.7 seconds, respectively. The experimental findings demonstrate that the proposed model outperforms the other comparison approaches.
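
    One plausible arrangement of the hybrid selection stage, sketched with scikit-learn on placeholder data; the exact chaining of the three methods and the DNN architecture are our assumptions, not the paper's: chi-square and ANOVA filters followed by PCA, feeding a small neural classifier.

```python
# Hybrid feature selection (chi-square -> ANOVA -> PCA) before a DNN classifier.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, chi2, f_classif
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import MinMaxScaler

rng = np.random.default_rng(2)
X = rng.random(size=(500, 40))         # placeholder for NSL-KDD-style features
y = rng.integers(0, 2, size=500)       # normal vs. attack

model = make_pipeline(
    MinMaxScaler(),                    # chi-square needs non-negative inputs
    SelectKBest(chi2, k=30),           # chi-square filter
    SelectKBest(f_classif, k=25),      # ANOVA F-test filter
    PCA(n_components=24),              # ~40% overall input reduction
    MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300),
)
model.fit(X, y)
print(model.score(X, y))
```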

    Malware Detection in Internet of Things (IoT) Devices Using Deep Learning

    Internet of Things (IoT) device usage is increasing exponentially with the spread of the internet. With the increasing amount of data on IoT devices, these devices are becoming vulnerable to malware attacks; therefore, malware detection becomes an important issue for IoT devices. An effective, reliable, and time-efficient mechanism is required for the identification of sophisticated malware. Researchers have proposed multiple methods for malware detection in recent years; however, accurate detection remains a challenge. We propose a deep learning-based ensemble classification method for the detection of malware in IoT devices. It uses a three-step approach: in the first step, data are preprocessed using scaling, normalization, and de-noising; in the second step, features are selected and one-hot encoding is applied; this is followed by an ensemble classifier based on CNN and LSTM outputs for the detection of malware. We have compared results with state-of-the-art methods, and our proposed method outperforms the existing methods on standard datasets with an average accuracy of 99.5%.
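
    A rough sketch of the CNN-plus-LSTM ensemble idea in Keras: two branches read the same preprocessed feature sequence and their output probabilities are averaged. Layer sizes, the averaging rule, and the synthetic data are illustrative assumptions, not the paper's configuration.

```python
# Two-branch (CNN + LSTM) binary malware classifier with averaged outputs.
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

seq_len, n_feat = 100, 1
inp = keras.Input(shape=(seq_len, n_feat))

cnn = layers.Conv1D(32, 3, activation="relu")(inp)
cnn = layers.GlobalMaxPooling1D()(cnn)
cnn_out = layers.Dense(1, activation="sigmoid")(cnn)

lstm = layers.LSTM(32)(inp)
lstm_out = layers.Dense(1, activation="sigmoid")(lstm)

# Simple ensembling: average the two branch probabilities.
out = layers.Average()([cnn_out, lstm_out])
model = keras.Model(inp, out)
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

X = np.random.rand(64, seq_len, n_feat).astype("float32")  # toy sequences
y = np.random.randint(0, 2, size=(64, 1))                  # benign/malware
model.fit(X, y, epochs=1, verbose=0)
```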

    An Empirical Assessment of Performance of Data Balancing Techniques in Classification Task

    Many real-world classification problems, such as fraud detection, intrusion detection, churn prediction, and anomaly detection, suffer from imbalanced datasets. In all such classification tasks, we therefore need to balance the imbalanced datasets before building classifiers for prediction purposes. Several data-balancing techniques (DBT) have been discussed in the literature to address this issue; however, not much work has been conducted to assess their performance. In this research paper we therefore empirically assess the performance of preprocessing-level data-balancing techniques, namely Under Sampling (US), Over Sampling (OS), Hybrid Sampling (HS), Random Over Sampling Examples (ROSE), Synthetic Minority Over-sampling Technique (SMOTE), and Clustering-Based Under Sampling (CBUS). We used six different classifiers and twenty-five different datasets with varying levels of imbalance ratio (IR) to assess the performance of the DBT. The experimental results indicate that DBT help to improve the performance of the classifiers; however, no significant difference was observed among the performance of US, OS, HS, SMOTE, and CBUS. It was also observed that the performance of DBT was not consistent across varying levels of IR in the dataset or across different classifiers.
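
    A minimal example of the balancing step using the imbalanced-learn package, covering three of the techniques compared (random under-sampling, random over-sampling, and SMOTE) on a synthetic 19:1 dataset; ROSE, HS, and CBUS are omitted for brevity.

```python
# Rebalance an imbalanced dataset with three common sampling techniques.
from collections import Counter

import numpy as np
from imblearn.over_sampling import RandomOverSampler, SMOTE
from imblearn.under_sampling import RandomUnderSampler

rng = np.random.default_rng(3)
X = rng.normal(size=(1000, 10))
y = np.array([0] * 950 + [1] * 50)     # imbalance ratio 19:1

for sampler in (RandomUnderSampler(), RandomOverSampler(), SMOTE()):
    X_res, y_res = sampler.fit_resample(X, y)
    print(type(sampler).__name__, Counter(y_res))  # class counts after balancing
```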

    Lightweight AI Framework for Industry 4.0 Case Study: Water Meter Recognition

    The evolution of applications in telecommunications, networking, computing, and embedded systems has led to the emergence of the Internet of Things and Artificial Intelligence. The combination of these technologies improves productivity by optimizing consumption and facilitating access to real-time information. This work focuses on the Industry 4.0 and Smart City paradigms and proposes a new approach to monitoring and tracking water consumption using OCR together with artificial intelligence algorithms, in particular the YOLOv4 machine learning model. The goal of this work is to provide optimized results in real time. The recognition rate obtained with the proposed algorithms is around 98%.
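
    A hedged sketch of digit detection on a meter image with a Darknet YOLOv4 model via OpenCV's DNN module. The file names yolov4-meter.cfg, yolov4-meter.weights, and water_meter.jpg are placeholders of our own, not artifacts from the paper.

```python
# Run a YOLOv4 detector over a water-meter photo and print detected digits.
import cv2

# Placeholder model files: a trained YOLOv4 config/weights pair is assumed.
net = cv2.dnn.readNetFromDarknet("yolov4-meter.cfg", "yolov4-meter.weights")
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

image = cv2.imread("water_meter.jpg")  # placeholder input image
class_ids, scores, boxes = model.detect(image, confThreshold=0.5, nmsThreshold=0.4)
for cid, score, box in zip(class_ids, scores, boxes):
    print(f"digit class {int(cid)} at {box} (confidence {float(score):.2f})")
```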

    Identification, 3D-Reconstruction, and Classification of Dangerous Road Cracks

    Advances in semiconductor technology and wireless sensor networks have permitted the development of automated inspection at diverse scales (machine, human, infrastructure, environment, etc.). However, automated identification of road cracks is still in its early stages, largely owing to the difficulty of obtaining pavement photographs and the small size of the flaws (cracks). The existence of pavement cracks and potholes reduces the value of the infrastructure, so the severity of each fracture must be estimated. Annually, operators in many nations must audit thousands of kilometers of road to locate this degradation, a procedure that is costly, slow, and produces fairly subjective results. The goal of this work is to create an efficient automated system for crack identification, extraction, and 3D reconstruction; crack-free roads are critical to preventing traffic deaths and saving lives. The proposed method consists of five major stages: flaws are detected after processing the input picture with a Gaussian filter, contrast adjustment, and, ultimately, threshold-based segmentation. We created a database of road cracks to assess the efficacy of our proposed method. The results obtained are commendable and outperform previous state-of-the-art studies.
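
    The stated detection stages map directly onto a few OpenCV calls; the sketch below uses CLAHE for the contrast adjustment and Otsu for the threshold, which are our concrete choices where the abstract leaves the specifics open. The file name pavement.jpg is a placeholder.

```python
# Gaussian filtering -> contrast adjustment -> threshold-based segmentation.
import cv2

gray = cv2.imread("pavement.jpg", cv2.IMREAD_GRAYSCALE)   # placeholder image
smoothed = cv2.GaussianBlur(gray, (5, 5), 0)              # noise suppression
clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
enhanced = clahe.apply(smoothed)                          # contrast adjustment
# Cracks are darker than pavement, so invert the Otsu threshold result.
_, mask = cv2.threshold(enhanced, 0, 255, cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)
cv2.imwrite("crack_mask.png", mask)                       # binary crack candidates
```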

    Leveraging Software-Defined Networking for a QoS-Aware Mobility Architecture for Named Data Networking

    Named Data Networking (NDN), a candidate future architecture for the internet, is a creative way to offer content-based services. NDN is well suited to content distribution because of its special characteristics, such as naming conventions for packets and mechanisms for in-network caching. Mobility is one of the main study areas for this innovative internet architecture, and the software-defined networking (SDN) approach is one feasible strategy for providing mobility management in NDN. Decoupling the network control plane from the data plane creates a more programmable platform and makes it possible for outside applications to define how a network behaves. SDN yields a straightforward and scalable network thanks to its key characteristics, including programmability, flexibility, and decoupled control. To address the problem of consumer mobility, we propose an efficient software-defined proactive caching architecture for consumer mobility (SDPCACM) in NDN that extends the SDN model to allow mobility control for the NDN architecture (NDNA), through which a mobile consumer (MC) receives data proactively after a handover while moving. When an MC watching a real-time video changes its attachment point from one access router to another, the SDN controllers preserve the network layout, topology, and link metrics to install updated routes when the handoff occurs, and through the proactive caching mechanism the previous access router proactively sends the desired packets to the newly connected routers. The intra-domain and inter-domain handover scenarios in SDPCACM for NDNA are described here in detail. Moreover, we simulate the proposed SDPCACM for NDN, giving an illustrative methodology and parameter configuration for virtual machines (VMs), OpenFlow switches, and an OpenDaylight (ODL) controller. The simulation results demonstrate that the proposed scheme brings significant improvements in CPU usage, delay time, jitter, throughput, and packet loss ratio.
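
    A toy illustration of the proactive-caching idea only, in plain Python: on a handover event the previous access router pushes the consumer's pending content to the new router, so the first request after the move hits the local cache. This simplifies SDPCACM drastically and models none of its SDN signaling.

```python
# Toy model: proactive cache push from the old access router on handover.
class AccessRouter:
    def __init__(self, name):
        self.name = name
        self.cache = {}                 # content name -> data

    def serve(self, content_name):
        if content_name in self.cache:
            return f"{self.name}: cache hit for {content_name}"
        return f"{self.name}: cache miss, fetch from producer"

def handover(consumer_pending, old_router, new_router):
    """Controller-triggered push of the consumer's pending content."""
    for name in consumer_pending:
        if name in old_router.cache:
            new_router.cache[name] = old_router.cache[name]

r1, r2 = AccessRouter("AR1"), AccessRouter("AR2")
r1.cache["/video/seg/42"] = b"..."
print(r2.serve("/video/seg/42"))        # miss before handover
handover(["/video/seg/42"], r1, r2)
print(r2.serve("/video/seg/42"))        # hit after the proactive push
```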

    Machine-Learning-Based COVID-19 Detection with Enhanced cGAN Technique Using X-ray Images

    The coronavirus disease (COVID-19) pandemic is a contemporary phenomenon. The disease first appeared in 2019 and has attracted a lot of attention in the public media and recent studies due to its rapid worldwide spread and the millions of individuals it has infected, many of whom have died in a short time. In recent years, several studies in artificial intelligence and machine learning have been published to aid clinicians in diagnosing and detecting viruses before they spread throughout the body, as well as in recovery monitoring, disease prediction, surveillance, tracking, and a variety of other applications. This paper aims to use chest X-ray images to diagnose and detect COVID-19 disease. The dataset used in this work is the COVID-19 Radiography Database, released in 2020 and consisting of four classes. The work is conducted on two classes of interest: the normal class, indicating that the person is not infected with the coronavirus, and the COVID-19 class, indicating that the person is infected. Because of the large imbalance between the two classes (more than 10,000 images in the normal class and fewer than 4,000 in the COVID-19 class), and the difficulty of obtaining or gathering more medical images, we took advantage of a generative network to produce new, realistic-looking samples to balance the number of images in each class; specifically, we used a conditional generative adversarial network (cGAN), whose architecture is explored in detail in the Data Preparation section. As the classification model, we employed VGG16; the Materials and Methods section contains detailed information on the planning and hyperparameters. We tested our improved model on a test set of 20% of the total data and achieved 99.76% accuracy for both the GAN and VGG16 models across a variety of preprocessing steps and hyperparameter settings.
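
    A sketch of the classification stage, assuming TensorFlow/Keras: a frozen VGG16 backbone with a small binary head (normal vs. COVID-19). The cGAN balancing stage is omitted here, and the head and hyperparameters are illustrative rather than the paper's.

```python
# VGG16 transfer-learning classifier for two-class chest X-ray screening.
from tensorflow import keras
from tensorflow.keras import layers

base = keras.applications.VGG16(
    weights="imagenet", include_top=False, input_shape=(224, 224, 3)
)
base.trainable = False                  # freeze the pretrained backbone

model = keras.Sequential([
    base,
    layers.GlobalAveragePooling2D(),
    layers.Dense(128, activation="relu"),
    layers.Dense(1, activation="sigmoid"),  # normal vs. COVID-19
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])
model.summary()
```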